
    Two Theorems in List Decoding

    We prove the following results concerning the list decoding of error-correcting codes: (i) We show that for \textit{any} code with relative distance $\delta$ (over a large enough alphabet), the following holds for \textit{random errors}: with high probability, for a $\rho \le \delta - \epsilon$ fraction of random errors (for any $\epsilon > 0$), the received word has only the transmitted codeword in the Hamming ball of radius $\rho$ around it. Thus, for random errors, one can correct twice the number of errors that are uniquely correctable from worst-case errors, for any code. A variant of our result also gives a simple algorithm to decode Reed-Solomon codes from random errors that, to the best of our knowledge, runs faster than known algorithms for certain ranges of parameters. (ii) We show that concatenated codes can achieve the list decoding capacity for erasures. A similar result for worst-case errors was proven by Guruswami and Rudra (SODA 08), although their result does not directly imply ours. Our results show that a subset of the random ensemble of codes considered by Guruswami and Rudra also achieves the list decoding capacity for erasures. Our proofs employ simple counting and probabilistic arguments.
    Comment: 19 pages, 0 figures
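    A slightly more formal restatement of claim (i) may help (this is a paraphrase for illustration; the notation $B(y, r)$ for the Hamming ball of relative radius $r$ around $y$ is introduced here, not taken from the paper): if $C \subseteq \Sigma^n$ has relative distance $\delta$, the alphabet $\Sigma$ is large enough, $0 < \rho \le \delta - \epsilon$, and $y$ is obtained from a transmitted codeword $c \in C$ by placing random errors on a $\rho$ fraction of its positions, then with high probability
    $$ B(y, \rho) \cap C = \{c\}. $$
    Since unique decoding from worst-case errors is guaranteed only up to relative radius $\delta/2$, this is the sense in which random errors allow correcting twice as many errors.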

    It'll probably work out: improved list-decoding through random operations

    In this work, we introduce a framework to study the effect of random operations on the combinatorial list-decodability of a code. The operations we consider correspond to row and column operations on the matrix obtained from the code by stacking the codewords together as columns. This captures many natural transformations on codes, such as puncturing, folding, and taking subcodes; we show that many such operations can improve the list-decoding properties of a code. There are two main points to this. First, our goal is to advance our (combinatorial) understanding of list-decodability by understanding what structure (or lack thereof) is necessary to obtain it. Second, we use our more general results to obtain a few interesting corollaries for list decoding: (1) We show the existence of binary codes that are combinatorially list-decodable from a $1/2 - \epsilon$ fraction of errors with optimal rate $\Omega(\epsilon^2)$ that can be encoded in linear time. (2) We show that any code with $\Omega(1)$ relative distance, when randomly folded, is combinatorially list-decodable from a $1 - \epsilon$ fraction of errors with high probability. This formalizes the intuition for why the folding operation has been successful in obtaining codes with optimal list-decoding parameters; previously, all arguments used algebraic methods and worked only with specific codes. (3) We show that any code which is list-decodable with suboptimal list sizes has many subcodes with near-optimal list sizes, while retaining the error-correcting capabilities of the original code. This generalizes recent results where subspace evasive sets have been used to reduce the list sizes of codes that achieve list decoding capacity.
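    To make the matrix view concrete, the following is a minimal illustrative sketch (using an arbitrary toy code and parameters chosen here, not a construction from the paper) of how puncturing, folding, and taking subcodes act as row and column operations on the matrix whose columns are the codewords:

```python
import numpy as np

# Illustrative sketch only: view a code as the matrix M whose columns are
# the codewords, so each row corresponds to a coordinate position.
# The toy code and all parameters below are arbitrary choices, not from the paper.

rng = np.random.default_rng(0)

# Toy binary linear code: columns of M are the codewords m @ G (mod 2)
# for all messages m, with a random k x n generator matrix G.
n, k = 12, 4
G = rng.integers(0, 2, size=(k, n))
messages = np.array([[(m >> i) & 1 for i in range(k)] for m in range(2 ** k)])
M = (messages @ G % 2).T                      # shape (n, 2^k): rows = coordinates

# Puncturing: keep only a subset of rows (drop some coordinate positions).
keep_rows = np.sort(rng.choice(n, size=8, replace=False))
punctured = M[keep_rows, :]                   # a length-8 code on the same codewords

# Folding by 2: randomly permute the rows, then bundle pairs of rows into a
# single coordinate over the larger alphabet {0,1}^2.
perm = rng.permutation(n)
folded = M[perm, :].reshape(n // 2, 2, -1)    # 6 folded coordinates per codeword

# Taking a subcode: keep only a subset of columns (a subset of the codewords).
keep_cols = rng.choice(M.shape[1], size=6, replace=False)
subcode = M[:, keep_cols]                     # same length, fewer codewords

print(M.shape, punctured.shape, folded.shape, subcode.shape)
```

    Puncturing and folding act on the rows (coordinate positions), while passing to a subcode acts on the columns (codewords); the paper studies how applying such operations at random affects combinatorial list-decodability.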